The SARTools R package which generated this report was developed at PF2 - Institut Pasteur by M.-A. Dillies and H. Varet (hugo.varet@pasteur.fr). When using this tool for any published analysis, please cite: H. Varet, L. Brillet-Guéguen, J.-Y. Coppee and M.-A. Dillies, SARTools: A DESeq2- and EdgeR-Based R Pipeline for Comprehensive Differential Analysis of RNA-Seq Data, PLoS One, 2016, doi: http://dx.doi.org/10.1371/journal.pone.0157022.

1 Introduction

The analyses reported in this document are part of the FoodTrans_metaT_milk_OG_3core project. The aim is to find features that are differentially expressed between All, minus ST, minus 5614, minus 6086, minus 10675 and minus SICO. The statistical analysis process includes data normalization, graphical exploration of raw and normalized data, a test for differential expression of each feature between the conditions, raw p-value adjustment, and export of lists of features showing significant differential expression between the conditions.

The analysis is performed using the R software [1], Bioconductor [2] packages including DESeq2 [3,4] and the SARTools package developed at PF2 - Institut Pasteur. Normalization and differential analysis are carried out according to the DESeq2 model and package. This report comes with additional tab-delimited text files that contain lists of differentially expressed features.

For more details about the DESeq2 methodology, please refer to its related publications [3,4].

2 Description of raw data

The count data files and associated biological conditions are listed in the following table.

Table 1: Data files and associated biological conditions.
label files condition
sample_1A OG_pan_count_coresample_1A_3.csv All
sample_1B OG_pan_count_coresample_1B_3.csv All
sample_1C OG_pan_count_coresample_1C_3.csv All
sample_2A OG_pan_count_coresample_2A_3.csv minus ST
sample_2B OG_pan_count_coresample_2B_3.csv minus ST
sample_2C OG_pan_count_coresample_2C_3.csv minus ST
sample_3A OG_pan_count_coresample_3A_3.csv minus 5614
sample_3B OG_pan_count_coresample_3B_3.csv minus 5614
sample_3C OG_pan_count_coresample_3C_3.csv minus 5614
sample_4A OG_pan_count_coresample_4A_3.csv minus 6086
sample_4B OG_pan_count_coresample_4B_3.csv minus 6086
sample_4C OG_pan_count_coresample_4C_3.csv minus 6086
sample_5A OG_pan_count_coresample_5A_3.csv minus 10675
sample_5B OG_pan_count_coresample_5B_3.csv minus 10675
sample_5C OG_pan_count_coresample_5C_3.csv minus 10675
sample_6A OG_pan_count_coresample_6A_3.csv minus SICO
sample_6B OG_pan_count_coresample_6B_3.csv minus SICO
sample_6C OG_pan_count_coresample_6C_3.csv minus SICO

After loading the data we first have a look at the raw data table itself. The data table contains one row per annotated feature and one column per sequenced sample. Row names of this table are feature IDs (unique identifiers). The table contains raw count values representing the number of reads that map onto the features. For this project, there are 1247 features in the count data table.

Table 2: Partial view of the count data table.
sample_1A sample_1B sample_1C sample_2A sample_2B sample_2C sample_3A sample_3B sample_3C sample_4A
OG100 521 719 439 1082 1343 747 1043 631 905 850
OG1001 9493 12809 7853 11343 13987 10032 21227 15979 18724 14723
OG1002 191 195 116 597 733 361 371 329 391 202
OG1004 1084 1562 1057 2735 2315 1832 1687 1283 1731 1660
OG1005 1665 2085 1212 2830 3017 2197 3207 2390 2873 1777
OG1006 3424 3816 2473 4093 5159 3209 4875 3432 4176 3493

Looking at the summary of the count table provides a basic description of these raw counts (minimum and maximum values, median, etc.).

Table 3: Summary of the raw counts.
Min. 1st Qu. Median Mean 3rd Qu. Max.
sample_1A 2 220 661 2193 1851 174825
sample_1B 0 277 878 2840 2379 212450
sample_1C 0 188 593 1885 1582 146740
sample_2A 0 405 1157 3870 3345 256827
sample_2B 0 401 1190 3784 3408 248461
sample_2C 0 278 820 2847 2430 191429
sample_3A 0 324 1019 3500 2957 244449
sample_3B 0 240 757 2662 2253 185929
sample_3C 0 327 980 3434 2852 247354
sample_4A 0 286 924 2917 2497 226095
sample_4B 0 97 319 1000 864 71941
sample_4C 0 321 1014 3260 2769 242166
sample_5A 1 252 719 2062 1877 119867
sample_5B 0 401 1265 3629 3516 214187
sample_5C 0 355 1147 3650 3227 226545
sample_6A 0 245 735 2301 2060 146057
sample_6B 0 260 804 2867 2383 199771
sample_6C 0 377 1144 3854 3338 273349

Figure 1 shows the total number of mapped and counted reads for each sample. We expect total read counts to be similar within conditions, although they may differ across conditions. Total counts sometimes vary widely between replicates. This may happen for several reasons, including:

  • different rRNA contamination levels between samples (even between biological replicates);
  • slight differences between library concentrations, since they may be difficult to measure with high precision.
Figure 1: Number of mapped reads per sample. Colors refer to the biological condition of the sample.

Figure 2 shows the percentage of features with no read count in each sample. We expect this percentage to be similar within conditions. Features with null read counts in all 18 samples are kept in the data but are not taken into account in the analysis with DESeq2. Here, 0 features (0%) are in this situation (dashed line). Results for those features (fold change and p-values) are set to NA in the results files.

Figure 2: Percentage of features with null read counts in each sample.

Figure 3 shows the distribution of read counts for each sample (on a log scale to improve readability). Again, we expect replicates to have similar distributions. In addition, this figure shows whether read counts are predominantly low, medium or high. This depends on the organisms as well as the biological conditions under consideration.

Figure 3: Density distribution of read counts.

It may happen that one or a few features capture a high proportion of reads (up to 20% or more). This phenomenon should not influence the normalization process: the DESeq2 normalization has been shown to be robust to this situation [Dillies, 2012]. In any case, we expect these high-count features to be the same across replicates, though not necessarily across conditions. Figure 4 and table 4 illustrate the possible presence of such high-count features in the data set.

Figure 4: Percentage of reads associated with the sequence having the highest count (provided in each box on the graph) for each sample.

Table 4: Percentage of reads associated with the sequences having the highest counts.
OG2113 OG1638 OG1507 OG1374 OG1241 OG1711
sample_1A 6.39 2.49 1.82 1.55 1.67 1.47
sample_1B 6.00 2.58 1.70 1.69 1.62 1.36
sample_1C 6.24 2.59 1.60 1.82 1.61 1.29
sample_2A 5.32 1.97 1.96 1.13 2.28 1.46
sample_2B 5.26 1.97 1.60 1.17 1.59 1.33
sample_2C 5.39 2.22 1.92 1.31 2.16 1.47
sample_3A 5.60 2.64 1.61 1.50 1.57 1.65
sample_3B 5.60 2.60 1.60 1.42 1.63 1.70
sample_3C 5.78 2.39 1.96 1.36 1.98 1.79
sample_4A 6.21 2.29 1.67 1.64 1.73 1.22
sample_4B 5.77 2.36 2.21 1.36 2.01 1.14
sample_4C 5.96 2.43 1.76 1.98 1.72 1.18
sample_5A 4.66 1.74 1.45 1.08 1.75 1.36
sample_5B 4.73 1.78 1.37 1.19 1.90 1.22
sample_5C 4.98 1.76 2.16 1.09 3.07 1.32
sample_6A 5.09 1.99 1.88 1.23 2.25 1.43
sample_6B 5.59 1.87 2.44 0.91 3.10 1.76
sample_6C 5.69 1.96 2.24 1.07 2.62 1.68

We may wish to assess the similarity between samples across conditions. A pairwise scatter plot is produced (figure 5) to show how similar or different replicates and samples from different biological conditions are (using a log scale). Moreover, as the Pearson correlation has been shown not to be a relevant measure of similarity between replicates, the SERE statistic has been proposed as a similarity index between RNA-Seq samples [5]. It measures whether the variability between samples is random Poisson variability or higher. Pairwise SERE values are printed in the lower triangle of the pairwise scatter plot. The value of the SERE statistic is:

  • 0 when samples are identical (no variability at all: this may happen in the case of a sample duplication);

  • 1 for technical replicates (technical variability follows a Poisson distribution);

  • greater than 1 for biological replicates and samples from different biological conditions (biological variability is higher than technical variability; data are over-dispersed with respect to Poisson). The higher the SERE value, the lower the similarity. It is expected to be lower between biological replicates than between samples of different biological conditions. Hence, the SERE statistic can also be used to detect sample swaps.
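The two-sample SERE computation is simple enough to sketch. The snippet below is a minimal Python illustration with toy counts (not the SARTools R implementation): observed counts are compared to their expected values under a common Poisson model, and the chi-square-type dispersion is averaged over the informative features, as described in [5].

```python
from math import sqrt

def sere(counts_a, counts_b):
    """Two-sample SERE: sqrt of the mean chi-square-type dispersion of
    observed counts around their expectation under a common Poisson model."""
    total_a, total_b = sum(counts_a), sum(counts_b)
    chi2, n_used = 0.0, 0
    for ya, yb in zip(counts_a, counts_b):
        pooled = ya + yb
        if pooled == 0:
            continue  # features with no reads at all carry no information
        # expected counts if both samples shared the same feature proportions
        ea = pooled * total_a / (total_a + total_b)
        eb = pooled * total_b / (total_a + total_b)
        chi2 += (ya - ea) ** 2 / ea + (yb - eb) ** 2 / eb
        n_used += 1
    # one degree of freedom per feature in a two-sample comparison
    return sqrt(chi2 / n_used)

a = [521, 9493, 191, 1084, 1665, 3424]  # toy raw counts for one sample
print(sere(a, a))                        # identical samples -> 0.0
```

Identical samples give 0, technical (Poisson) replicates hover around 1, and biological replicates or samples from different conditions exceed 1.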

Figure 5: Pairwise comparison of samples (not produced when more than 12 samples).

3 Variability within the experiment: data exploration

The main variability within the experiment is expected to come from biological differences between the samples. This can be checked in two ways. The first is to perform a hierarchical clustering of the whole sample set. This is done after a transformation of the count data, which can be either a Variance Stabilizing Transformation (VST) or a regularized log transformation (rlog) [3,4].

A VST is a transformation of the data that makes them homoscedastic, meaning that the variance becomes independent of the mean. It is performed in two steps: (i) a mean-variance relationship is estimated from the data with the same function that is used to normalize count data, and (ii) from this relationship, a transformation of the data is performed in order to obtain a dataset in which the variance is independent of the mean. Homoscedasticity is a prerequisite for some data analysis methods, such as hierarchical clustering or Principal Component Analysis (PCA). The regularized log transformation is based on a GLM (Generalized Linear Model) of the counts and has the same goal as a VST, but is more robust when the size factors vary widely.
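To see why such transformations help, consider a toy example (hypothetical counts, plain Python; the report itself uses DESeq2's VST in R): when the relative spread of counts is roughly constant across features, the standard deviation grows with the mean on the raw scale but is constant on a log scale, which is the homoscedasticity that clustering and PCA require.

```python
from math import log2
from statistics import pstdev

# toy replicate counts for a low-, mid- and high-expression feature,
# each with the same 20% relative spread (hypothetical values)
features = {"low": [8, 10, 12], "mid": [80, 100, 120], "high": [800, 1000, 1200]}

for name, counts in features.items():
    raw_sd = pstdev(counts)                     # grows with the mean
    log_sd = pstdev([log2(c) for c in counts])  # roughly constant
    print(f"{name}: raw sd = {raw_sd:.1f}, log2 sd = {log_sd:.3f}")
```

The raw standard deviations span two orders of magnitude while the log-scale ones are identical, so distances between samples are no longer dominated by the few highest-expressed features.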

Figure 6 shows the dendrogram obtained from VST-transformed data. A Euclidean distance is computed between samples, and the dendrogram is built with the Ward criterion. We expect this dendrogram to group replicates together and separate biological conditions.

Figure 6: Sample clustering based on normalized data.

Another way of visualizing the variability within the experiment is to look at the first principal components of the PCA, as shown in figure 7. In this figure, the first principal component (PC1) is expected to separate samples from the different biological conditions, meaning that the biological variability is the main source of variance in the data.

Figure 7: First two components of a Principal Component Analysis, with percentages of variance associated with each axis.

4 Normalization

Normalization aims at correcting systematic technical biases in the data in order to make read counts comparable across samples. The normalization proposed by DESeq2 relies on the hypothesis that most features are not differentially expressed. It computes a scaling factor for each sample. Normalized read counts are obtained by dividing raw read counts by the scaling factor associated with the sample they belong to. Scaling factors around 1 mean (almost) no normalization is performed. Scaling factors lower than 1 produce normalized counts higher than the raw ones, and vice versa. Two options are available to compute the scaling factors: locfunc=“median” (default) or locfunc=“shorth”. Here, the normalization was performed with locfunc=“median”.
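The median-of-ratios computation behind these scaling factors can be sketched in a few lines. This is a toy Python illustration with hypothetical counts; the actual factors in table 5 come from DESeq2's estimateSizeFactors.

```python
from math import exp, log
from statistics import median

def size_factors(counts):
    """Median-of-ratios size factors, as in DESeq2 with locfunc='median'.
    `counts` is a list of samples, each a list of raw counts per feature."""
    n_features = len(counts[0])
    factors = []
    for sample in counts:
        ratios = []
        for i in range(n_features):
            column = [s[i] for s in counts]
            if min(column) == 0:
                continue  # features with a zero are excluded from the reference
            # geometric mean of the feature across all samples
            geo_mean = exp(sum(log(c) for c in column) / len(column))
            ratios.append(sample[i] / geo_mean)
        factors.append(median(ratios))
    return factors

# a sample sequenced twice as deeply gets a size factor sqrt(2) times larger
counts = [[100, 200, 50, 1000], [200, 400, 100, 2000]]
print(size_factors(counts))  # ~[0.707, 1.414]
```

Taking the median of the per-feature ratios is what makes the estimate robust to the minority of genuinely differentially expressed features.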

Table 5: Normalization factors.
sample_1A sample_1B sample_1C sample_2A sample_2B sample_2C sample_3A sample_3B sample_3C sample_4A sample_4B sample_4C sample_5A sample_5B sample_5C sample_6A sample_6B sample_6C
Size factor 0.8 1.02 0.68 1.36 1.38 0.99 1.24 0.93 1.22 1.03 0.36 1.17 0.84 1.42 1.35 0.85 0.95 1.33

The histograms (figure 8) can help to validate the choice of the normalization parameter (“median” or “shorth”). Under the hypothesis that most features are not differentially expressed, each size factor represented by a red line is expected to be close to the mode of the distribution of the counts divided by their geometric means across samples.

Figure 8: Diagnostic of the estimation of the size factors.

Figure 9 shows that the DESeq2 scaling factors and the total count normalization factors may not behave similarly.

Figure 9: Plot of the estimated size factors and the total number of reads per sample.

Boxplots are often used as a qualitative measure of the quality of the normalization process, as they show how distributions are globally affected during this process. We expect normalization to stabilize distributions across samples. Figure 10 shows boxplots of raw (left) and normalized (right) data respectively.

Figure 10: Boxplots of raw (left) and normalized (right) read counts.

5 Differential analysis

5.1 Modeling

DESeq2 fits one generalized linear model per feature. For this project, the design used is counts ~ condition and the goal is to estimate the model coefficients, which can be interpreted as \(\log_2(\texttt{FC})\) values. These coefficients are then tested to obtain p-values and adjusted p-values.

5.2 Outlier detection

Model outliers are features for which at least one sample seems unrelated to the experimental or study design. For every feature and every sample, the Cook’s distance [6] measures how strongly that sample influences the fitted model. A large Cook’s distance indicates an outlier count, and p-values are not computed for the corresponding feature.
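As an illustration of the general definition [6] (DESeq2 computes Cook's distances per feature within its GLM fit; this toy Python sketch uses a simple least-squares line instead), an outlying observation combines a large residual with high leverage and therefore a large Cook's distance:

```python
def cooks_distances(x, y):
    """Cook's distance for each observation of a simple linear fit y ~ a + b*x."""
    n, p = len(x), 2                       # p = number of model parameters
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * yi for xi, yi in zip(x, y)) / sxx
    a = sum(y) / n - b * xbar
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - p)            # residual variance
    hat = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]  # leverages
    return [r * r / (p * s2) * h / (1 - h) ** 2 for r, h in zip(resid, hat)]

x = [1, 2, 3, 4, 5, 6]
y = [2, 4, 6, 8, 10, 30]   # last point is an outlier
d = cooks_distances(x, y)
print(d.index(max(d)))      # -> 5: the outlier dominates the fit
```

In DESeq2, counts whose Cook's distance exceeds a cutoff (cooksCutoff, enabled for this project) trigger the outlier handling described above.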

5.3 Dispersions estimation

The DESeq2 model assumes that the count data follow a negative binomial distribution, a robust alternative to the Poisson distribution when data are over-dispersed (the variance is higher than the mean). The first step of the statistical procedure is to estimate the dispersion of the data. Its purpose is to determine the shape of the mean-variance relationship. The default is to apply a GLM (Generalized Linear Model) based method (fitType=“parametric”), which can handle complex designs but may not converge in some cases. The alternatives are fitType=“local”, as described in the original paper [3], and fitType=“mean”. The parameter used for this project is fitType=“parametric”. DESeq2 then applies a Cox-Reid-adjusted profile likelihood maximization [7] and uses the maximum a posteriori (MAP) estimate of the dispersion [Wu, 2013].
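The mean-variance relationship underlying this model, \(\text{Var} = \mu + \alpha\mu^2\), can be illustrated with a naive method-of-moments estimate of the dispersion \(\alpha\). This is a toy Python sketch with hypothetical counts; DESeq2's actual Cox-Reid/MAP machinery is far more involved.

```python
from statistics import fmean, pvariance

def mom_dispersion(counts):
    """Method-of-moments estimate of the NB dispersion alpha,
    solving Var = mu + alpha * mu**2. Returns 0 when not over-dispersed."""
    mu = fmean(counts)
    var = pvariance(counts)
    return max((var - mu) / mu ** 2, 0.0)

replicates = [1082, 1343, 747]   # toy raw counts for one feature
print(mom_dispersion(replicates))
# Poisson-like data (variance ~ mean) would give a value near 0;
# a larger alpha reflects extra biological variability between replicates.
```

With only a handful of replicates per feature, such per-feature estimates are very noisy, which is precisely why DESeq2 shrinks them toward the fitted mean-dispersion trend.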

Figure 11: Dispersion estimates (left) and diagnostic of log-normality (right).

The left panel of figure 11 shows the result of the dispersion estimation step. The x- and y-axes represent the mean count value and the estimated dispersion, respectively. Black dots represent empirical dispersion estimates for each feature (from the observed counts). The red dots show the mean-variance relationship function (fitted dispersion value) as estimated by the model. The blue dots are the final maximum a posteriori estimates, which are used to perform the statistical test. Blue circles (if any) point out dispersion outliers: features with a very high empirical variance (computed from the observed counts) that falls far from the model estimate. For these features, the statistical test is based on the empirical variance in order to be more conservative than with the MAP dispersion, so they have a low chance of being declared significant. The right panel allows one to check the hypothesis of log-normality of the dispersions.

5.4 Statistical test for differential expression

Once the dispersion estimation and the model fitting have been done, DESeq2 can perform the statistical testing. Figure 12 shows the distributions of raw p-values computed by the statistical test for the comparison(s) done. This distribution is expected to be a mixture of a uniform distribution on \([0,1]\) and a peak around 0 corresponding to the differentially expressed features.

Figure 12: Distribution(s) of raw p-values.

5.5 Independent filtering

DESeq2 can perform independent filtering to increase the power to detect differentially expressed features at the same experiment-wide type I error. Since features with very low counts are unlikely to show significant differences, typically because of high dispersion, it defines a threshold on the mean of the normalized counts irrespective of the biological condition. This procedure is independent because the information about the variables in the design formula is not used [4].
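The idea can be sketched as follows (toy Python with hypothetical mean counts and p-values; DESeq2's results() function performs the real procedure on the normalized counts): try a grid of mean-count thresholds, adjust the p-values of the features passing each threshold with BH, and keep the threshold that maximizes the number of discoveries at level \(\alpha\).

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adj = [0.0] * n
    running_min = 1.0
    for rank in range(n, 0, -1):       # walk from largest to smallest p-value
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        adj[i] = running_min           # enforce monotonicity
    return adj

# (mean normalized count, raw p-value) per feature -- hypothetical values
features = [(0.4, 0.9), (0.6, 0.5), (2, 0.2), (5, 0.15), (8, 0.3),
            (30, 0.004), (200, 0.004), (800, 0.0005), (60, 0.006), (1500, 0.6)]
alpha = 0.01

best = None
for threshold in [0, 0.5, 1, 10, 50]:
    kept = [p for mean, p in features if mean >= threshold]
    hits = sum(q <= alpha for q in bh_adjust(kept))
    if best is None or hits > best[1]:
        best = (threshold, hits)
print(best)  # (best threshold, number of discoveries)
```

Discarding hopeless low-count features before the adjustment reduces the multiple-testing burden, so the same true signals survive the BH correction more easily.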

Table 6 reports the thresholds used for each comparison and the number of features discarded by the independent filtering. Adjusted p-values of discarded features are then set to NA.
Table 6: Number of features discarded by the independent filtering for each comparison.
Test vs Ref Threshold # discarded
minus ST vs All 0.69 1
minus 5614 vs All 0.69 1
minus 6086 vs All 0.69 1
minus 10675 vs All 0.69 1
minus SICO vs All 0.69 1
minus 5614 vs minus ST 0.69 1
minus 6086 vs minus ST 0.69 1
minus 10675 vs minus ST 0.69 1
minus SICO vs minus ST 0.69 1
minus 6086 vs minus 5614 0.69 1
minus 10675 vs minus 5614 0.69 1
minus SICO vs minus 5614 22.02 49
minus 10675 vs minus 6086 0.69 1
minus SICO vs minus 6086 0.69 1
minus SICO vs minus 10675 0.69 1

5.6 Final results

A p-value adjustment is performed to take multiple testing into account and control the false discovery rate at a chosen level \(\alpha\). For this analysis, a BH (Benjamini-Hochberg) p-value adjustment was performed [8] and the controlled false discovery rate was set to 0.01.

Table 7: Number of up-, down- and total number of differentially expressed features for each comparison.
Test vs Ref # down # up # total
minus ST vs All 176 192 368
minus 5614 vs All 205 150 355
minus 6086 vs All 41 41 82
minus 10675 vs All 294 285 579
minus SICO vs All 211 241 452
minus 5614 vs minus ST 138 74 212
minus 6086 vs minus ST 238 244 482
minus 10675 vs minus ST 278 254 532
minus SICO vs minus ST 165 129 294
minus 6086 vs minus 5614 262 292 554
minus 10675 vs minus 5614 341 342 683
minus SICO vs minus 5614 247 287 534
minus 10675 vs minus 6086 279 237 516
minus SICO vs minus 6086 195 208 403
minus SICO vs minus 10675 123 140 263

Figure 13 shows the MA-plot of the data for each comparison, where differentially expressed features are highlighted in red. An MA-plot represents the log ratio of differential expression as a function of the mean intensity for each feature. Triangles correspond to features whose \(\log_2(\text{FC})\) is too low or too high to be displayed on the plot.

Figure 13: MA-plot(s) of each comparison. Red dots represent significantly differentially expressed features.

Figure 14 shows the volcano plots for the comparisons performed; differentially expressed features are again highlighted in red. A volcano plot represents the \(-\log_{10}\) of the adjusted p-value as a function of the log ratio of differential expression.

Figure 14: Volcano plot(s) of each comparison. Red dots represent significantly differentially expressed features.

Full results as well as lists of differentially expressed features are provided in the following text files which can be easily read in a spreadsheet. For each comparison:

  • TestVsRef.complete.txt contains results for all the features;
  • TestVsRef.up.txt contains results for significantly up-regulated features, ordered from the most significant adjusted p-value to the least significant;
  • TestVsRef.down.txt contains results for significantly down-regulated features, ordered from the most significant adjusted p-value to the least significant.

These files contain the following columns:

  • Id: unique feature identifier;
  • sampleName: raw counts per sample;
  • norm.sampleName: rounded normalized counts per sample;
  • baseMean: base mean over all samples;
  • All, minus ST, minus 5614, minus 6086, minus 10675 and minus SICO: means (rounded) of normalized counts of the biological conditions;
  • FoldChange: fold change of expression, calculated as \(2^{\log_2(\text{FC})}\);
  • log2FoldChange: \(\log_2(\text{FC})\) as estimated by the GLM model. It reflects the differential expression between Test and Ref and can be interpreted as \(\log_2(\frac{\text{Test}}{\text{Ref}})\). If this value is:
    • around 0: the feature expression is similar in both conditions;
    • positive: the feature is up-regulated (\(\text{Test} > \text{Ref}\));
    • negative: the feature is down-regulated (\(\text{Test} < \text{Ref}\));
  • stat: Wald statistic for the coefficient tested;
  • pvalue: raw p-value from the statistical test;
  • padj: adjusted p-value on which the cut-off \(\alpha\) is applied;
  • dispGeneEst: dispersion parameter estimated from feature counts (i.e. black dots on figure 11);
  • dispFit: dispersion parameter estimated from the model (i.e. red dots on figure 11);
  • dispMAP: dispersion parameter estimated from the Maximum A Posteriori model;
  • dispersion: final dispersion parameter used to perform the test (i.e. blue dots and circles on figure 11);
  • betaConv: convergence of the coefficients of the model (TRUE or FALSE);
  • maxCooks: maximum Cook’s distance of the feature.
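The split into the up and down files follows directly from these columns, as in this toy Python sketch (hypothetical Ids and values): a feature is written to the up (respectively down) file when its padj is at most \(\alpha\) and its log2FoldChange is positive (respectively negative), sorted by adjusted p-value.

```python
alpha = 0.01

# hypothetical rows: (Id, log2FoldChange, padj)
results = [("OG100", 1.8, 0.0002), ("OG1001", -0.1, 0.74),
           ("OG1002", -2.3, 0.0015), ("OG1004", 0.4, 0.12)]

# up-regulated: significant and positive log2FC; down-regulated: negative
up = sorted((r for r in results if r[2] <= alpha and r[1] > 0), key=lambda r: r[2])
down = sorted((r for r in results if r[2] <= alpha and r[1] < 0), key=lambda r: r[2])

print([r[0] for r in up])    # -> ['OG100']
print([r[0] for r in down])  # -> ['OG1002']
```

The complete file keeps all rows, including the non-significant ones, so the up and down files are always subsets of it.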

6 R session information and parameters

The versions of the R software and Bioconductor packages used for this analysis are listed below. It is important to record them in order to reproduce the analysis under the same conditions.

  • R version 3.6.3 (2020-02-29), x86_64-apple-darwin15.6.0
  • Locale: en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
  • Running under: macOS Mojave 10.14.6
  • Matrix products: default
  • BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
  • LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
  • Base packages: base, datasets, graphics, grDevices, methods, parallel, stats, stats4, utils
  • Other packages: Biobase 2.46.0, BiocGenerics 0.32.0, BiocParallel 1.20.1, DelayedArray 0.12.2, DESeq2 1.26.0, devtools 2.2.2, edgeR 3.28.1, GenomeInfoDb 1.22.0, GenomicRanges 1.38.0, ggplot2 3.3.0, IRanges 2.20.2, kableExtra 1.1.0, limma 3.42.2, matrixStats 0.55.0, RColorBrewer 1.1-2, S4Vectors 0.24.3, SARTools 1.7.2, SummarizedExperiment 1.16.1, usethis 1.5.1
  • Loaded via a namespace (and not attached): acepack 1.4.1, annotate 1.64.0, AnnotationDbi 1.48.0, assertthat 0.2.1, backports 1.1.5, base64enc 0.1-3, bit 1.1-15.2, bit64 0.9-7, bitops 1.0-6, blob 1.2.1, callr 3.4.2, checkmate 2.0.0, cli 2.0.2, cluster 2.1.0, colorspace 1.4-1, compiler 3.6.3, crayon 1.3.4, data.table 1.12.8, DBI 1.1.0, desc 1.2.0, digest 0.6.25, dplyr 0.8.5, ellipsis 0.3.0, evaluate 0.14, fansi 0.4.1, farver 2.0.3, foreign 0.8-76, Formula 1.2-3, fs 1.3.2, genefilter 1.68.0, geneplotter 1.64.0, GenomeInfoDbData 1.2.2, GGally 1.4.0, ggdendro 0.1-20, ggrepel 0.8.2, glue 1.3.1, grid 3.6.3, gridExtra 2.3, gtable 0.3.0, highr 0.8, Hmisc 4.3-1, hms 0.5.3, htmlTable 1.13.3, htmltools 0.4.0, htmlwidgets 1.5.1, httr 1.4.1, jpeg 0.1-8.1, knitr 1.28, labeling 0.3, lattice 0.20-40, latticeExtra 0.6-29, lifecycle 0.2.0, locfit 1.5-9.1, magrittr 1.5, MASS 7.3-51.5, Matrix 1.2-18, memoise 1.1.0, munsell 0.5.0, nnet 7.3-13, pillar 1.4.3, pkgbuild 1.0.6, pkgconfig 2.0.3, pkgload 1.0.2, plyr 1.8.6, png 0.1-7, prettyunits 1.1.1, processx 3.4.2, ps 1.3.2, purrr 0.3.3, R6 2.4.1, Rcpp 1.0.3, RCurl 1.98-1.1, readr 1.3.1, remotes 2.1.1, reshape 0.8.8, rlang 0.4.5, rmarkdown 2.1, rpart 4.1-15, rprojroot 1.3-2, RSQLite 2.2.0, rstudioapi 0.11, rvest 0.3.5, scales 1.1.0, sessioninfo 1.1.1, splines 3.6.3, stringi 1.4.6, stringr 1.4.0, survival 3.1-11, testthat 2.3.2, tibble 2.1.3, tidyselect 1.0.0, tools 3.6.3, vctrs 0.2.4, viridisLite 0.3.0, webshot 0.5.2, withr 2.1.2, xfun 0.12, XML 3.99-0.3, xml2 1.2.5, xtable 1.8-4, XVector 0.26.0, yaml 2.2.1, zlibbioc 1.32.0

Parameter values used for this analysis are:

  • workDir: /Users/dkinkj/OneDrive - Chr Hansen/InKj/Analysis/metaT/Diff_Expression_OG_3core
  • projectName: FoodTrans_metaT_milk_OG_3core
  • author: InKj
  • targetFile: target.txt
  • rawDir: OG_count_samples_core
  • featuresToRemove: alignment_not_unique, ambiguous, no_feature, not_aligned, too_low_aQual
  • varInt: condition
  • condRef: All
  • batch: NULL
  • fitType: parametric
  • cooksCutoff: TRUE
  • independentFiltering: TRUE
  • alpha: 0.01
  • pAdjustMethod: BH
  • typeTrans: VST
  • locfunc: median
  • colors: #8B535D, #BE939B, #456C6F, #85B0B3, #B08B53, #DAC19B, #59596E, #9898AC, #245776, #5BA2CD

Bibliography

1. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing, 2017.

2. Gentleman RC, Carey VJ, Bates DM et al. Bioconductor: Open software development for computational biology and bioinformatics. Genome Biology 2004 ; 5: R80.

3. Anders S, Huber W. Differential expression analysis for sequence count data. Genome Biology 2010 ; 11: R106.

4. Love MI, Huber W, Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biology 2014 ; 15: 550.

5. Schulze SK, Kanwar R, Gölzenleuchter M et al. SERE: Single-parameter quality control and sample comparison for RNA-seq. BMC Genomics 2012 ; 13: 524.

6. Cook RD. Detection of influential observation in linear regression. Technometrics 1977 ; 19: 15–18.

7. Cox DR, Reid N. Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society. Series B (Methodological) 1987 ; 49: 1–39.

8. Benjamini Y, Hochberg Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological) 1995 ; 57: 289–300.